Comment Re:"probably. We're not 100% sure about it...." (Score 1) 130

It explicitly notes that "correlation between next-word prediction... and brain alignment fades once models surpass human language proficiency."

There's a hypothesis, but we don't know because they haven't surpassed human language proficiency. Not even close.

Just... no. You are confusing general intelligence with predictive accuracy. The paper defines proficiency specifically as next-token prediction (perplexity). If you had read the paper, you would have known this. How's that for a hypothesis? You don't get to dismiss an argument if you can't even get what you are dismissing right. On that specific metric, LLMs have mathematically surpassed the average human (see Shlegeris et al., 2022, cited in the paper). Unlike you, the paper isn't fantasizing about a future sci-fi AI; it is presenting empirical data on current models (like Pythia-6.9B). It measures how, as their predictive accuracy exceeds human baselines, their internal processing mechanisms diverge from human brain activity. That's a measured fact, not a hypothesis. Thwok -- ball's in your court.
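For anyone who wants the metric made concrete: perplexity is just the exponentiated average negative log-probability a predictor assigns to each actual next token. A minimal sketch, with made-up probabilities (none of these numbers are from the paper):

import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability assigned
    to each actual next token. Lower means better prediction."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Toy per-token probabilities for the true next word -- illustrative only.
model_probs = [0.30, 0.22, 0.41, 0.18, 0.25]
human_probs = [0.12, 0.20, 0.15, 0.10, 0.18]

print(f"model perplexity: {perplexity(model_probs):.2f}")  # lower (better)
print(f"human perplexity: {perplexity(human_probs):.2f}")  # higher

Lower perplexity is the "proficiency" the paper is talking about; no claim about general intelligence anywhere in sight.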

Comment Re:good to have more tools (Score 1) 7

It's always nice to get more information from existing signals. There's already lots of work being done to find meteorites and reentry debris in weather radar signals.

Yep. Weather radar is already the found money channel for meteors and reentry junk, so squeezing a little more truth out of existing signals is always a win.
What made me grin about this article is how old-school the core idea is -- the Mach cone hitting the ground traces out a hyperbola if you plot the difference in arrival times of the signal at any pair of sensors in the debris path. All it takes is some high-school-level algebra (conic sections ftw!) and a little creative thinking. I once caught a public talk at the Pima Air & Space Museum where an electrical engineer walked us through using USGS seismic stations to suss out the track of something hypersonic crossing the desert Southwest. He basically used sonic-boom footprints and this same idea about signal arrival times. His slide deck strongly suggested something doing Mach 6+ between Groom Lake in Nevada and Deer Island in southern California every couple of weeks. The punchline: he gave the whole talk standing under the wing of the museum’s SR-71. I asked during Q&A if that was a coincidence. He smiled: “Nope, not a coincidence.” (Crowd laughed, because of course they did -- Aurora and the X-39 were open secrets at the time.)
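If you want to see how little math it takes, here's a minimal sketch of the arrival-time trick. The sensor coordinates, source position, and constant sound speed are all made-up simplifications:

import math

SPEED_OF_SOUND = 343.0  # m/s; treating it as constant is a simplification

def arrival_time_delta(src, s1, s2):
    """Time difference of arrival (TDOA) of one boom at two ground sensors."""
    return (math.dist(src, s1) - math.dist(src, s2)) / SPEED_OF_SOUND

# Two hypothetical sensors 20 km apart, boom source somewhere to the north.
s1, s2 = (0.0, 0.0), (20_000.0, 0.0)
src = (7_000.0, 12_000.0)

dt = arrival_time_delta(src, s1, s2)

# Every point p satisfying dist(p, s1) - dist(p, s2) == c*dt lies on one
# branch of a hyperbola with foci s1 and s2. Intersect the branches from
# several sensor pairs and the track falls out -- conic sections ftw.
assert math.isclose(math.dist(src, s1) - math.dist(src, s2),
                    SPEED_OF_SOUND * dt)
print(f"TDOA = {dt:+.3f} s -> one hyperbola branch, foci s1 and s2")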

Comment don't miss this... (Score 2) 20

oblig XKCD

NOAA says we’ve hit G4 (Severe) geomagnetic storm levels, with a G4 watch still in play, plus an S4 (Severe) radiation storm riding along for the fun parts of physics that don’t care about our weekend plans.

I’ve been lucky enough to catch the aurora a handful of times in my sixty-four years on the planet, mostly back in my Reagan-era Air Force days. The one that branded itself into my brain was San Francisco: sitting in lawn chairs on the roof of my friend Joe’s place in the Marina, six stories up, watching ribbons and waterfall-cascades of light over the Golden Gate like the sky had decided to show off for the bay.

If you’ve never seen an auroral display in its full glory, put it on your bucket list... let the universe remind you it has an art department.

Comment Re:This "fix" really benefits who? Hint: not rider (Score 1) 171

It's the same reasoning used to demand that people use busses instead of cars in the first place.

No, it isn’t. The serious argument for transit is not “demand that people use buses,” it’s “make transit competitive enough that people choose it.” That’s the entire point of frequency, reliability, safe stops, shade/shelter, and not trapping buses behind single-occupant cars. Framing it as a moral scold is a neat rhetorical shortcut, but it’s not a description of how real transit planning works. It’s a libertarian caricature of it.

Using a bus takes longer and requires that you walk to the stop.

Correct, and that’s exactly why your next leap doesn’t follow. “There is a baseline cost” does not imply “therefore raising that cost is fine.” Saying “people already walk to transit” is not a blank check to increase first/last mile distance, especially when the increase is not evenly survivable across riders, cities, or seasons.

Those things are costs that the passenger has to pay, and advocates of busses fail to treat those costs as important.

That’s just not true unless you define “advocates” as “a guy you got mad at in a city council meeting once.” Waiting time, walking distance, transfers, safety, and reliability are literally the core metrics transit agencies and planners obsess over. They quantify them. They model them. They spend entire budgets trying to reduce them. Pretending nobody treats them as important is how you smuggle in the conclusion that riders’ costs don’t matter, without having to defend it.

Also: “advocates of buses” is doing a lot of work here. If what you mean is “commie-pinko socialists,” you can just say “people who think a functioning city has social obligations,” because that’s what you’re really trying to sneer at.

Once you've reasoned that far, what's different about making people walk an extra block by cutting down stops, compared to making people walk to the stop anyway?

What’s different is that one is the admission price of using the service, and the other is a policy decision to raise that price. “You already pay a fee” is not a principled argument for “we should increase the fee.”

And it’s not “an extra block” in the only places this debate matters. In Tucson or Phoenix in August, on cracked sidewalks, across hostile crossings, with zero shade, that “extra block” is not a neutral rounding error. For an elderly rider, a disabled rider, someone carrying groceries, someone with a stroller, someone with a bad heart, it’s a barrier to entry and sometimes a health risk. You don’t get to declare it negligible by imagining a perfectly abled rider in perfect weather and calling that realism.

Here’s the part you’re trying to skate past: if stop consolidation comes with reinvestment, it can be defensible. If the savings buy higher frequency, better reliability, real stops with shade/benches/lighting, safer crossings, and actual priority in traffic, then you can argue it improves generalized cost.

But that’s not what this article is selling. It’s selling “cheap, fast, no politics,” which usually translates to “we made the ledger better by moving costs onto riders.” That was the point of my post -- calling this article out for what it is advocating: a libertarian pipe dream camouflaged as "more efficient government." The article all but had "Approved by DOGE" watermarked across each page.

So yes: walking to transit is a cost. That’s why you don’t casually increase it and call it “efficiency” unless you’re ideologically committed to one specific miracle: the spreadsheet improves while the rider absorbs the consequences.

Comment Rule 34 meets AI, and Steam is rightfully worried (Score 1) 18

Steam’s updated AI disclosure policy is exactly the kind of policy you write when you’ve had to operate a platform at scale: focus on what the player actually consumes, and treat “live generation” as a different beast than “we used AI in the build pipeline.”

If a developer used an AI code helper, who cares? That’s a hammer, not a house. Steam seems to agree: gamers, in general, don't seem to care whether AI was used by devs to build the game.

But “AI content generated during gameplay” absolutely deserves its own checkbox, because it’s not a shipped asset, it’s a content faucet. A static pile of art/dialogue can be reviewed the same way anything else can. A runtime generator can create novel text/images/audio on demand, after release, in combinations nobody tested, reviewed, or even imagined.

And yes: Rule 34 applies. Sad, but true:

If it can be sexualized, it will be.
If it can be used to harass, it will be.
If it can be steered into “oops that’s copyrighted,” it will be.

Not because gamers are uniquely depraved, but because scale turns edge cases into Fox News chyrons and class-action lawsuits.

From the Steam legal department's perspective, the problem is painfully practical:
1) Their normal review process can’t verify content that doesn’t exist yet.
2) The Steam Distribution Agreement still expects “no illegal/infringing content,” whether it was handcrafted, procedurally generated, or hallucinated into existence at runtime.
3) DMCA-style cleanup is a terrible fit for infinite output. “We’ll patch it later” is not a compliance strategy, it’s a confession that you broke the law.

So Valve draws a clean line:
- Pre-generated AI content: disclose it, and it gets reviewed like any other content.
- Live-generated AI content: disclose it, AND tell them what guardrails you’ve put in place so the thing doesn’t generate illegal content on demand.

That’s not anti-AI. That’s not a “ban.” It’s the minimum viable paperwork you need when you’re selling games in a world where a text box can become a liability generator.

If you ship a book, Steam can read the book.
If you ship a game with an AI generator in it, Steam wants the label on the crate and the safety interlocks described, because some players are going to go 4chan with it the moment it boots.

Comment Re:"probably. We're not 100% sure about it...." (Score 1) 130

It's very obvious the neural networks aren't doing what the human brain does.

Yes, and airplanes don't flap their wings. That hasn’t stopped anyone from flying. LLMs don’t model biology—they model language. In fact, the paper actually argues that high-performance LLMs are diverging from human brain activity, not converging. It explicitly notes that "correlation between next-word prediction... and brain alignment fades once models surpass human language proficiency." The paper suggests LLMs solve linguistic tasks using mechanisms distinct from the human brain once they scale past a certain point. So you are right that they function differently, but wrong to imply this makes them ineffective. They are engineering a different path to similar (or superior) linguistic outcomes.

Among other things, humans don't consume the entire internet to string together a coherent sentence. Humans learn to read usually with a single textbook. The difference in information volume is astounding.

Sure—and humans also don’t need to tokenize input or normalize embeddings across 4K context windows. LLMs aren’t trying to replicate how humans learn; they’re engineering a different path to similar linguistic outcomes. Yes, the data scale is immense—but that’s the price of not having a body, a childhood, or a caregiver. LLMs bootstrap from scratch using raw text alone—no multimodal cues, no affective modeling, no theory of mind—just prediction and gradient descent. Given those constraints, it’s astonishing how well they perform.

Also, your “single textbook” theory of human development misses just how scaffolded and embodied natural language acquisition actually is. Kids don’t learn to read from a textbook—they acquire language through years of immersion, social cues, repetition, and multimodal feedback, all before formal literacy even starts. And unlike LLMs, human brains hardwire their primary language in early development—by age six, those linguistic pathways are largely fixed, which is why second-language acquisition is so much harder later in life. LLMs don’t have that bottleneck. You can train them in Hindi, then in English, then in code-switching Hinglish, and they won’t blink. The architecture doesn’t care—it just maps tokens to a Hilbert space, and then converges on a contour in that space. Flexibility like that isn’t a flaw; it’s a feature.

Furthermore, humans don't have a training then a production mode. We are constantly learning, and can modify our brain in real time. The cognitive dissonance is a bit painful, though.

True—humans can learn during deployment, but we’re also really good at forgetting mid-sentence and updating our priors based on vibes. Humans have distinct developmental phases, and our neuroplasticity drops significantly with age. LLMs don’t have continual learning in the weights (yet), but we already have retrieval-augmented generation (RAG) and LoRA adapters that allow real-time updates without full retraining, and streaming models with fine-tuning during inference are already on the roadmap.

Also, cognitive dissonance? You’re swimming in it. You demand biological parity while rejecting the very differences that let these systems scale. You need to pick a lane, here.

Another thing is recursion: human brains can send synapses back and have feedback loops. LLMs don't do that because it makes the training a lot more expensive.

You are confusing architectural recurrence with functional recursion. May I suggest Hofstadter's excellent "Gödel, Escher, Bach" for a lucid, high-school-level primer on recursion? You mentioned Hofstadter below; I can see why you are confusing these two concepts if you've only read his "I Am a Strange Loop" without really understanding mathematical recursion. Transformers are indeed feed-forward, but their inference process is auto-regressive—the output of step T feeds back in as input at step T+1. Techniques like chain-of-thought prompting and scratchpad reasoning explicitly induce recursive reasoning steps.
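To make the distinction concrete, here's a minimal sketch of that feedback loop (the "model" is a trivial stand-in, not a real transformer):

def generate(model, prompt_tokens, n_steps):
    """Feed-forward network, recursive *process*: every new token is
    computed from all prior tokens, including the model's own output."""
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        next_token = model(tokens)  # step T consumes outputs of steps < T
        tokens.append(next_token)   # ...and feeds them forward to step T+1
    return tokens

# Trivial stand-in "model": emit the last token plus one.
toy_model = lambda toks: toks[-1] + 1
print(generate(toy_model, [1, 2, 3], 4))  # [1, 2, 3, 4, 5, 6, 7]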

LLMs are not a strange loop.

Not even Roger Penrose would call that the litmus test for strange loops. At least not in Hofstadter’s strictest sense. But when a system trained on human data starts generating output that humans can’t reliably distinguish from their own thoughts (this is exactly the point Turing makes with his imitation game) and starts studying itself to refine future versions—that’s not far off. Hofstadter and Dennett echo this in "The Mind's I". And Dennett is even more explicit about this: humans are “a bunch of tricks” running on meat hardware—a statement that’s becoming eerily relevant to LLMs. Strange loops aren’t measured by the wiring diagram—they emerge from self-reference and representational feedback. And by that standard, today’s LLMs are flirting with loopiness. Turing gave us the imitation game. Hofstadter gave us strange loops. LLMs didn’t ask to be players in either game—but they’re getting eerily good at both.

Comment Re:Higher natgas prices? (Score 1) 62

Gas combined cycle is still beating nearly everyone

"Beating nearly everyone" is doing a lot of heavy lifting in that sentence. Lazard’s unsubsidized new-build ranges have gas combined-cycle at $48–$109/MWh. Utility solar is $38–$78. Onshore wind is $37–$86. Those ranges overlap significantly, and in many markets wind/solar are simply cheaper. You cannot claim gas is beating nearly everyone while ignoring the half of the chart where it plainly isn't.

(solar/wind without storage "doesn't count" since it requires expensive peaker plants for support).

Peakers existed long before modern renewables. They exist because demand peaks due to air conditioners, heat waves, and industrial ramps. Renewables didn’t invent the concept of a peak. Furthermore, the "expensive peaker" you are invoking is, overwhelmingly, a gas plant (Lazard LCOE: $149–$251/MWh). Your argument relies on circular logic: You claim renewables are expensive because they require backing from gas peakers... which are expensive because they burn gas. That isn't an indictment of renewables; it's an indictment of the volatility of the fossil fuel business model.
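To see the dispatch mechanics in action, here's a toy merit-order sketch. The heat rates, fuel prices, and O&M numbers are illustrative, not Lazard's:

def marginal_cost(fuel_price, heat_rate, vom=3.0):
    """$/MWh = fuel price ($/MMBtu) * heat rate (MMBtu/MWh) + variable O&M."""
    return fuel_price * heat_rate + vom

# Illustrative plants, not real fleet data.
plants = {"wind/solar": 0.0,                 # ~zero marginal cost
          "coal": marginal_cost(2.2, 10.0),  # cheap fuel, poor heat rate
          "gas_cc": None}                    # depends on the gas market

for gas_price in (2.5, 4.0):  # $/MMBtu, roughly a pre/post price-spike pair
    plants["gas_cc"] = marginal_cost(gas_price, 7.0)  # efficient combined cycle
    order = sorted(plants, key=plants.get)
    print(f"gas @ ${gas_price}/MMBtu -> dispatch order: {order}")

# Cheap gas: gas_cc dispatches ahead of coal. Pricey gas: coal moves up
# the merit order, and emissions follow.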

Also that is bulk power generation.

Correct, and nobody pretended otherwise. Lazard is a generation-cost benchmark. It’s still useful because it answers a very specific question: “What does it cost to build new supply?”

Show me the cheapest commercial and consumer rates and then show me the power breakdown for them. Hint: it's combined natgas and hydroelectric with some coal.

Now you are moving the goalposts from generation costs to retail bills and relying on geography rather than economics. "Cheapest rates" is cherry-picking by design. If you pick states with legacy hydro (Idaho, Washington), of course rates are low. Show me a state with mountains and giant rivers that we dammed 60 years ago, and I’ll show you cheap power. That isn't a scalable national energy strategy; that is geography. Retail rates are also not a pure reflection of generation—they include transmission, debt, storm hardening, and local taxes. If you want a retail-rate debate, start a thread about that. Dragging it in here is a transparent attempt to dodge the actual topic.

Back to the Rhodium report, and the topic you are trying to avoid. The article clearly states that 2025 emissions rose 2.4% largely because a colder winter increased heating demand, and higher natural gas prices pushed the grid back toward coal.

Henry Hub natgas: +58%
Coal generation: +13%
Power-sector emissions: +3.8%

Do you dispute Rhodium’s causal chain (higher gas prices + load growth -> more coal -> higher emissions)?

If you don’t dispute it, then your requests for retail rate comparisons are just a distraction. The data shows that when gas gets pricey, the grid backslides into coal. That is the volatility of your preferred system in action, and is what this thread is actually about; if you don't want to deal with that inconvenient truth, fine, but don't try to distract from it with faulty logic and cheap rhetorical tactics.

Comment Re:Higher natgas prices? (Score 5, Informative) 62

I'm still waiting on actual price comparisons between natgas and solar/wind + storage.

Translation: “I’m still waiting for someone else to do 30 seconds of googling for me.” Lazard (a finance firm, not a Greenpeace pamphlet press) publishes an annual LCOE+ benchmark. Unsubsidized $/MWh ranges for new-build generation in their 2025 report:

Utility solar: $38–$78
Onshore wind: $37–$86
Utility solar + storage: $50–$131
Onshore wind + storage: $44–$123
Gas combined cycle: $48–$109
Coal: $71–$173
Gas peaker: $149–$251

The “price comparison” you claim to be waiting on exists, is mainstream, and it directly contradicts the narrative you are trying to tee up. The data is more than enough to dismantle any claim that renewables are inherently the expensive option. Pretending this data doesn’t exist is a distraction tactic, allowing the discussion to float in vibes where “renewables feel expensive” hides the real story, which is pretty simple: when gas gets expensive, coal looks good in the short term. But in the economic (and ecological) long run, renewables win, and all fossil fuels lose.

Seems like every market that fully embraces renewables experiences high electrical prices despite continued assurances that renewables are cheap.

This is a classic fossil fuel industry bait-and-switch: conflating generation cost with retail billing, then blaming renewables for everything from transmission buildout to legacy regulatory failures. The story we’re discussing is that 2025 U.S. emissions went up. Why they went up is exactly what your request for cost data attempts to obfuscate. The answer is right there in the Rhodium report:

Emissions: +2.4%
Coal generation: +13%
Henry Hub natgas prices: +58% (a key driver of coal coming back onto the margin)
Solar: +34% (zero-emitting sources hit 42% of the grid)

The actual 2025 lesson is clear: when gas gets pricey, the system backslides to coal—causing emissions to jump—unless we build enough clean capacity to absorb the demand. That isn't "renewables making electricity expensive." That is fossil volatility and dispatch economics yet again biting us in the ass. Attempting to drag this thread into a mushy debate about high utility prices in renewable states ignores the hard data in the articles. Fossil fuel generation is a price-volatility machine that defaults to coal when the gas market sneezes. If you want to argue system costs, start a thread about system costs. But using that complexity to hand-wave away a massive spike in coal emissions is transparently dishonest.

Comment This "fix" really benefits who? Hint: not riders (Score 4, Insightful) 171

This article is a libertarian take on public infrastructure, specifically public transportation infrastructure. The article is trying to sell stop consolidation as the magic, low-cost way to speed up buses. This is unsurprising given the source of the article: a libertarian-leaning magazine that reliably treats “efficiency” as a fig leaf for eroding public service and knee-capping social obligations. And yes, the provenance matters. The founding editor of Works In Progress has affiliations with both the Adam Smith Institute and the Mercatus Center, which are mainline conduits for libertarian policy, so the “optimize the system by externalizing the costs” vibe is pretty much expected.

With that said, the underlying physics is real: stopping costs time. Dwell time, decel/accel, re-entering traffic, missed lights. Fine. But the policy pitch is classic principal-agent optimization: it’s “cheap” because it shifts costs off the operator’s ledger and onto the rider’s legs, which is pretty much par for the course when it comes to libertarian fiscal policy -- consequences are not real, but money is. It is also lazy engineering. It avoids the hard work of solving traffic flow (bus lanes, signal priority) and instead chooses the lazy path of making the service worse for the customer to make the spreadsheet look better.

Stop consolidation is a libertarian contractor’s dream because the benefits are concentrated: fewer stops means faster runtimes, cleaner schedules, and better KPIs for agency managers. The budget line looks great. But that’s because they aren’t optimizing transit for riders, they’re optimizing the ledger by externalizing operating costs onto the end-user’s body.

It’s the public transit equivalent of self-checkout at the grocery store: the store fires the cashier and calls it “customer empowerment.” The spreadsheet improves because the shopper is now doing, for free, what the store used to pay someone to do. Same trick here. “Efficiency” gets defined so narrowly that any pain you shove onto the public doesn’t count as a cost. The bus gets “faster” because the rider is doing more work. The budget looks “better” because the rider is paying in sweat, risk, and time that never shows up on the ledger. And here’s where libertarian policy shows up in full: the “savings” aren’t treated as rider benefits to reinvest in frequency, safety, shade, benches, lighting, or reliability. They’re treated as proof that government can do less. The service gets worse, the spreadsheet gets better, and the riders get told to call it “efficiency.”

If you had any doubts about the bias, look at the variables they deliberately ignore. The article breezily claims that walking an extra 500 feet takes only 1.5 to 2.5 minutes, as if every rider is equally abled, unburdened, and climate-proof. To a healthy 25-year-old libertarian consultant, 500 feet is nothing. To an elderly rider in Phoenix, Dallas, or Las Vegas, on a cracked sidewalk in August, it’s a barrier to entry, period. That “negligible” walk can be a health risk.

This is the spherical cow problem in policy form: optimize for an imaginary rider in perfect conditions, then declare the results “efficient.” In the real world, it’s not reform. It’s cost shifting with better branding. This article is just libertarian wishful thinking, not a genuine solution to a real public transit problem.

Submission + - Language models resemble more than just language cortex, show neuroscientists (foommagazine.org)

Gazelle Bay writes: In a paper presented in November 2025 at the Empirical Methods in Natural Language Processing (EMNLP) conference, researchers at the Swiss Federal Institute of Technology (EPFL), the Massachusetts Institute of Technology (MIT), and Georgia Tech revisited earlier findings that showed that language models, the engines of commercial AI chatbots, show strong signal correlations with the human language network, the region of the brain responsible for processing language.

In their new results, they found that signal correlations between model and brain region change significantly over the course of the 'training' process, where models are taught to autocomplete as many as trillions of elided words (or sub-words, known as tokens) from text passages.

The correlations between the signals in the model and the signals in the language network reach their highest levels relatively early on in training. While further training continues to improve the functional performance of the models, it does not increase the correlations with the language network.

The results lend clarity to the surprising picture that has been emerging from the last decade of neuroscience research: That AI programs can show strong resemblances to large-scale brain regions—performing similar functions, and doing so using highly similar signal patterns.

Such resemblances have been exploited by neuroscientists to make much better models of cortical regions. Perhaps more importantly, the links between AI and cortex provide an interpretation of commercial AI technology as being profoundly brain-like, validating both its capabilities and the risks it might pose for society as the first-ever synthetic braintech.

"It is something we, as a community, need to think about a lot more," said Badr AlKhamissi, doctoral student in neuroscience at EPFL and first author of the preprint, in an interview with Foom. "These models are getting better and better every day. And their similarity to the brain [or brain regions] is also getting better—probably. We're not 100% sure about it."

Comment Re:Hurting overall performance (Score 1) 39

Just no.

You're misrepresenting the case. Whether that’s due to ignorance or bad faith remains to be seen—but you're making claims that only a pirate (or someone who benefits from piracy) would seriously push.

The problem here is that cloudflare is asked to remove the pirate DNS not only from Italy, but from all world!

Not even close. AGCOM isn’t claiming global jurisdiction—though they are stretching the concept of national enforcement to the edge of legality. What they’re actually doing is asserting a narrow, if deeply flawed, principle: if you provide services to Italian users, you must follow Italian law for those users.

Cloudflare isn’t being told to nuke DNS globally. They’re being told to poison DNS responses for Italian users, within 30 minutes of receiving a takedown notice, under penalty of fines. Could Cloudflare geo-fence this? Technically, yes. But they’re (rightly) choosing to challenge it in court—because this is not a normal administrative request. This is state-mandated DNS manipulation, with no meaningful due process, no appeal, and a minimum six-month blackout.

That's not compliance. That’s conscription.

and it is not a court order, is a half ass rule that some groups can make the request just because they want to... think in to the DMCA USA requests, but with even less oversight and trying to apply to all world, not just the country

This is even more misleading. You're right that it’s not a court order. But it does carry the force of law—within Italy—and only within Italy. This isn’t about AGCOM trying to impose Italian law on the world. It’s about whether a domestic administrative body can demand global companies enforce IP address blocking and DNS tampering under threat of financial penalty—without judicial review.

And your DMCA comparison? Apples to turpentine. For all its faults, the DMCA requires sworn statements, allows counterclaims, and provides a mechanism for resolution—even across borders. Piracy Shield offers none of that. It allows private rightsholders to trigger a six-month IP or DNS block just by filing a complaint. No court. No hearing. No evidence. No appeal.

It’s the legal equivalent of “trust me, bro.”

So no, this isn’t Italy enforcing law on “all the world.” It’s something worse: a national regulatory agency trying to dodge EU safeguards, weaponize infrastructure, and hope nobody notices.

Criticize Piracy Shield for what it actually is: a blunt, overreaching censorship mechanism that collides directly with EU law—including the Digital Services Act, the Charter of Fundamental Rights, and the TRIS notification requirements. If you're going to argue in good faith, start there.

Comment Geofencing Meets Legislative Lunacy (Score 2) 39

What we’re really witnessing is the clash between sovereign legal overreach and the cold, scalable reality of global internet infrastructure.

Cloudflare already geo-routes DNS traffic. They already maintain regionally isolated PoPs. Spinning up a DNS resolver farm to handle just Italian DNS queries—and poisoning those results to comply with AGCOM’s blocklists—is not some Herculean task. It’s annoying, yes. But it’s well within the operational wheelhouse of a company that routes 10% of all HTTP traffic on Earth. And the beauty of such a solution? It would contain the blast radius. Italian users would get filtered DNS. Everyone else gets normal service. And Cloudflare could even win PR points:

“Sorry for the slowness, Italy. We’re trying to follow your government’s censorship orders as responsibly as we can.”
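Mechanically, the geo-fence is nothing exotic. A toy sketch of the idea -- the domain, the IP, and the country lookup are all stand-ins, and this is emphatically not Cloudflare's resolver code:

BLOCKLIST = {"pirate-streams.example"}  # hypothetical AGCOM blocklist entry

def resolve(domain, client_country, upstream):
    """Geo-fenced resolver: poison answers for Italian clients only."""
    if client_country == "IT" and domain in BLOCKLIST:
        return None             # NXDOMAIN for Italy
    return upstream(domain)     # normal answer for everyone else

upstream = lambda domain: "203.0.113.7"  # stand-in upstream lookup
print(resolve("pirate-streams.example", "IT", upstream))  # None (blocked)
print(resolve("pirate-streams.example", "DE", upstream))  # 203.0.113.7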

But Cloudflare didn’t take that route. Instead, they escalated—because this isn’t about can’t. It’s about won’t. Cloudflare clearly wants to fight this in the open and let AGCOM defend this legislative overreach in public. I think Cloudflare will win, actually, for a couple of reasons.

First, this case can (and probably will) be escalated to the European Court of Justice. Italy's Piracy Shield law is already on shaky ground, legally speaking. Its 30-minute takedown mandates and six-month mandatory IP blocking provisions violate both the EU’s Digital Services Act and the EU Charter of Fundamental Rights. Those two conflicts alone give the European Commission plenty of reason to scrutinize Piracy Shield—and back Cloudflare if it goes to court.

Second—and for my money, what ultimately kills the law—is the EU’s Technical Regulation Information System (TRIS), the transparency mechanism under Directive (EU) 2015/1535. EU member states are required to notify the Commission of any draft technical regulations affecting the internal market, particularly when they concern information society services. Italy did not notify the Commission before implementing Piracy Shield, even though it obviously impacts CDNs, DNS providers, and ISPs. That omission renders the law unenforceable under EU law. Any measure passed in breach of TRIS cannot be enforced against third parties.

And not to put too fine a point on it, but there’s a reason Italy skipped the notification. Piracy Shield is a transparent attempt to conscript infrastructure providers as low-rent copyright cops. AGCOM knew it would get flagged or killed by the Commission before it ever left Parliament. So they pushed it through fast—and hoped nobody would call their bluff.

Don't misconstrue me, here. I'm not defending piracy. Copyright enforcement matters. But so do proportionality, redress mechanisms, and rule-of-law protections. Piracy Shield is legislative thuggery, not justice.

Comment Re:Code is code. (Score 1) 53

This is it exactly.

It’s exactly true only for the easiest failure mode: obvious junk. Nobody needs an AI detector to reject garbage. Kernel maintainers have been rejecting human-generated garbage since before LLMs darkened the software dev community's skies. The hard problem isn’t slop. The hard problem is credible-looking patches that meet the immediate spec, match local style, compile cleanly, and still encode a subtle bug or a long-term maintenance tax.

There's not even any point to trying to figure out whether it's idiot-generated slop or AI-generated slop.

If all you care about is trash vs not-trash, sure. But kernel hygiene is not a single trash can. When a new tool changes the volume and the shape of incoming patches, maintainers need to know what they’re dealing with so they can adapt their defenses: stricter expectations on rationale, smaller patches, clearer commit messages, more benchmarks for perf-sensitive code, and less tolerance for “it works on my machine.” That’s why the ongoing discussion is leaning toward tool disclosure and “show your work,” not a futile attempt at mind-reading whether a bot wrote the diff. Also, provenance isn’t about blame. It’s about correlated risk. If a model tends to repeat certain patterns (same error-handling shape, same locking assumptions, same “looks right” idioms), you can get a whole class of bugs that are individually non-obvious but statistically related. You don’t get to manage that if you deliberately blindfold yourself.

Just figure out whether or not it's slop, and then reject it if it is.

Define “slop” without smuggling in the answer. Is slop code that fails checkpatch? Easy. Is it code that’s correct but needlessly generic, future-proofed into unreadability, and quietly hostile to the next maintainer? That’s the kind of quality failure that doesn’t show up in a diff review unless the author can justify every tradeoff. And that loops back to the real point: the kernel review process assumes the submitter can defend design choices and follow up when reviewers poke holes. An LLM can only participate in that social contract indirectly; if a human can do that, great, then they’re the author and the tool is irrelevant. If the human can’t, the patch is a liability, even if it isn’t AI slop.
 

Comment Re:Code is code. (Score 5, Insightful) 53

[Reply to ConceptJunkie]

Either it meets the high standards required by the kernel team or it doesn't.

That binary sounds great until you remember what “standards” actually means in kernel-land. It’s not just “passes tests” or “meets the spec.” The spec is the easy part. The standards also include: does it fit the subsystem’s design, does it avoid cleverness debt, does it behave across a zoo of arches/configs, does it keep the fast path fast, and can future maintainers reason about it without holding candlelit lockdep séances.

Also: kernel review isn’t a theorem prover. It’s a risk-management pipeline with finite reviewer attention. “Meets the standards” is often a judgment call made under time pressure, not a formally verified conclusion.

It doesn't matter if it was written by AI, aliens or Linus himself.

It matters a lot, and you accidentally picked the perfect trio to prove it.

If Linus writes it, Linus can explain it, defend it on the list, revise it when a maintainer says “no, not like that,” and own the fallout for the next decade. Aliens can’t answer review questions. An LLM can’t show up on LKML and say “good catch, here’s why I chose this memory barrier, and here’s the perf data on Zen4 vs Graviton.” Origin matters because accountability matters.

The kernel doesn’t merge diffs. It merges an ongoing relationship with an author who can justify tradeoffs and do follow-up when reality punches the patch in the face. That’s not politics, that’s maintenance. And it’s exactly why the proposed guidance keeps circling around transparency and “make it easy to review,” rather than pretending there’s a magic AI detector.

I use AI tools when coding and I've used it to generate code at times, but I read through it with a fine-toothed comb, test it thoroughly, and don't commit anything I don't 100% understand.

Good. That’s the only sane way to use any generator, including StackOverflow and “I found this gist on a blog from 2013.”

But “100% understand” is where the wheels start to come off. You think you tested it thoroughly, and it is this kind of innocent arrogance that stops a coding career in its tracks. In kernel code, you can understand what the lines say and still miss what they do when the scheduler, the memory model, the compiler, and three architectures start arguing in the hallway. Races, refcount lifetimes, RCU subtleties, error paths, and performance cliffs do not politely announce themselves during your “thorough testing.” Even experts rely on collective review, fuzzing, CI farms, and years of scar tissue because humans are not exhaustive-state-space machines. To claim you’ve thoroughly tested anything is arrogance, not expertise. And here’s the AI-specific twist: LLMs are great at producing code that looks like something a careful person would write. That’s not “slop.” That’s plausibly-correct code that can sail through casual review and still be wrong in the exact corner you didn’t think to test. The dangerous patches are the ones that look boring.

I think anyone working on the kernel is easily capable of the same thing.

I think you’re describing the top slice of kernel contributors and then declaring policy based on the best case. The kernel also has drive-by patches, corporate throughput patches, newbie patches, and “I fixed my one bug, I give zero shits about the downstream" patches. I'm guilty of this last one; I connected a 7.1 surround system via HDMI to my 4090, and watched my display go dark. Why did I have to go under the hood? Because EDID—the ancient, flaky, and apparently immortal display-identification block that HDMI still carries over the Display Data Channel for compatibility's sake—lets any device scream "I'm a monitor!" loudly enough to hijack your system, even if your main display is DP. The kernel insisted that the EDID signal was the main display, so it kept trying to throw the desktop to the phantom VGA display that EDID said existed on my LG soundbar. I couldn't reprogram the soundbar; instead I did brain surgery on drivers/gpu/drm/drm_edid.c to filter out my particular soundbar. Works for me; absolutely will not work for 99.99% of the rest of the planet. I'm a sysadmin, not a coder, but I do have a BS in CS, and I can code when I have to. Kernel maintainers could obviously come up with a better hack than I did, but this is a corner case that will never be loud enough to draw the mainline's attention. I had to lean heavily on an LLM to even diagnose this problem; but once it converged, it gave me the root cause, and even helpfully suggested the code I needed to patch into the kernel. It took a few tries -- my C fu was never strong, but it *worked.*
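For the curious: the vendor ID that let me finger the soundbar is just three 5-bit letters packed into EDID bytes 8-9 (LG's PNP ID decodes to "GSM"). A userspace sketch -- the sysfs connector path is machine-specific, and this is the diagnosis step only, not my kernel patch:

from pathlib import Path

def pnp_id(edid: bytes) -> str:
    """Decode the 3-letter PNP vendor ID from EDID bytes 8-9.
    Each letter is 5 bits, with 1 = 'A'."""
    v = (edid[8] << 8) | edid[9]
    return "".join(chr(ord("A") + ((v >> s) & 0x1F) - 1) for s in (10, 5, 0))

# card0-HDMI-A-1 is an example connector name; yours will differ.
p = Path("/sys/class/drm/card0-HDMI-A-1/edid")
if p.exists():
    blob = p.read_bytes()
    if len(blob) >= 10:
        print("vendor:", pnp_id(blob))  # 'GSM' would finger an LG device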

In a bucket, if you want an AI posture that’s actually compatible with kernel reality, it’s this:
If you can’t explain it, you don’t own it.
If you don’t own it, it doesn’t belong in mainline.
Tools are fine. Epistemic outsourcing is not.

Comment AI finds the needle in the haystack (Score 3, Interesting) 50

So it looks like the editor sprinkled the magic word “AI” on the headline and slashdot did what it always does: auto-spawned 200 trollish variations of “lol AI hype.” Cute. Meanwhile, the actual story here is a lot more practical than the kneejerk anti-AI trolls realize.

What Zanskar is doing is using machine-learning models to spot blind hydrothermal systems: reservoirs with no obvious surface tells. No hot springs, no geysers, no “hey look, free steam!” signpost. In other words: no leaking clues. Humans have traditionally hunted where the geology is loud. Zanskar is trying to hear the quiet stuff, then drill to confirm.
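The general shape of that approach -- and this is a toy with synthetic data, nothing to do with Zanskar's actual models -- is a classifier scoring map cells on geophysical features, then ranking where to spend the drilling budget:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for survey layers (heat flow, resistivity, fault
# density), one row per map grid cell; labels = known wells, which are rare.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 1.5

clf = GradientBoostingClassifier().fit(X, y)

# Score unexplored cells by predicted probability of a blind system and
# drill the top of the list first -- that's the de-risking, in one line.
candidates = rng.normal(size=(10, 3))
scores = clf.predict_proba(candidates)[:, 1]
print(np.argsort(scores)[::-1])  # best drill targets first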

And once you confirm it, the extraction is basically the normal geothermal playbook: find a likely spot -> drill production well(s) -> bring up hot fluid -> strip heat at the surface -> reinject the cooled fluid back underground.

So the key tradeoff vs. traditional geothermal isn’t new extraction vs. old extraction. AI reduces exploration risk (fewer dry holes), but you still face the classic geothermal buildout grind: drilling cost, reservoir management, cooling choice, permitting, interconnection queues, etc. Zanskar’s bet isn’t new thermodynamics. It’s “we can de-risk the needle-in-a-haystack exploration phase.”

Now here's the part where Arizona, Nevada, and Utah start side-eyeing the whole thing. If this hydrothermal renaissance turns into “power for data centers, paid for by sucking from the last puddle,” that’s not clean energy. That’s just a different flavor of externalized cost, and those of us living in the waterless paradise that is the desert Southwest get to pick up the tab. The good news: geothermal doesn’t have to be a water vampire. Many systems reinject what they produce. The risk knobs are mostly about water recovery -- especially surface evaporation losses in wet-cool loops, which are the most likely (read: highest-profit-margin) system designs.

My hope, and my ask, for anyone deploying this in the West is to be smart about it. Treat potable groundwater as off-limits unless there’s no alternative and it’s transparently justified. Prioritize closed-loop/reinjection-heavy designs and aggressive leak accounting. Use non-potable sources for any makeup water (brackish, treated, industrial) whenever possible, and pick cooling systems with the desert reality in mind, not the pie-in-the-sky brochure pitch to investors. Last but not least -- monitor and disclose environmental impacts like you actually have to live here afterward.

Using AI to find the needle in the hydrothermal haystack? Absolutely. That’s a sensible tool applied to a hard search problem. Just don’t let “AI found clean energy” become the preface to “and then we sucked your groundwater reserves dry to power yet another chip fab/AI datacenter in Phoenix.”
